
    SMO-based pruning methods for sparse least squares support vector machines

    Solutions of least squares support vector machines (LS-SVMs) are typically non-sparse. Sparseness is usually imposed by subsequently omitting the data points that introduce the smallest training errors and retraining on the remaining data. Such iterative retraining requires more intensive computation than training a single non-sparse LS-SVM. In this paper, we propose a new pruning algorithm for sparse LS-SVMs: the sequential minimal optimization (SMO) method is introduced into the pruning process, and, instead of determining the pruning points by their training errors, we omit the data points whose removal introduces the minimum change to the dual objective function. This new criterion is computationally efficient. The effectiveness of the proposed method in terms of computational cost and classification accuracy is demonstrated by numerical experiments.
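
    As a rough, hedged illustration of this pruning style (not the authors' SMO-based implementation), the sketch below trains a small LS-SVM classifier by solving its linear system and then greedily removes the point whose omission changes a simplified dual-objective value the least. The RBF kernel, hyperparameters, and the brute-force retraining at every step (precisely the cost the paper's SMO-based approach is designed to avoid) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_ls_svm(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM linear system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
    # with labels y in {-1, +1}.
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    # Simplified surrogate for the dual objective, used only for the pruning criterion below.
    dual_obj = 0.5 * alpha @ (K + np.eye(n) / gamma) @ alpha
    return alpha, b, dual_obj

def prune_ls_svm(X, y, keep_ratio=0.5, gamma=10.0, sigma=1.0):
    # Greedy pruning: at each step drop the point whose removal changes the
    # dual-objective value the least, then retrain on the remaining data
    # (a naive stand-in for the paper's SMO-based incremental updates).
    idx = np.arange(len(y))
    _, _, obj = train_ls_svm(X[idx], y[idx], gamma, sigma)
    while len(idx) > keep_ratio * len(y):
        changes = []
        for k in range(len(idx)):
            trial = np.delete(idx, k)
            _, _, trial_obj = train_ls_svm(X[trial], y[trial], gamma, sigma)
            changes.append(abs(trial_obj - obj))
        idx = np.delete(idx, int(np.argmin(changes)))
        _, _, obj = train_ls_svm(X[idx], y[idx], gamma, sigma)
    return idx  # indices of the retained (support) points
```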

    Multi-Channel Magnetocardiography System Based on Low-Tc SQUIDs in an Unshielded Environment

    Magnetocardiography (MCG) using superconducting quantum interference devices (SQUIDs) is a new medical diagnostic tool that measures the biomagnetic signals generated by the electrical activity of the human heart. The technique is completely passive and contactless, and it offers an advantage in the early diagnosis of heart disease. We developed the first unshielded four-channel MCG system based on low-Tc DC SQUIDs in China. Instead of using a costly magnetically shielded room, environmental noise suppression was realized with second-order gradiometers and a three-axis reference magnetometer. The measured magnetic field resolution of the system is better than 1 pT, and multi-cycle human heart signals can be recorded directly. In addition, with the infrared positioning system, data collection at 48 points can be realized by moving the non-magnetic bed nine times.
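
    A common software counterpart of this hardware noise suppression is reference-channel cancellation: the environmental field measured by the reference magnetometer is fit to the gradiometer output by least squares and subtracted. The sketch below is a generic illustration of that idea, not the described system's actual processing chain; the channel layout, sampling rate, and signal amplitudes are assumptions.

```python
import numpy as np

def cancel_reference_noise(grad_signal, ref_signals):
    """Subtract the least-squares projection of three reference-magnetometer
    channels from a gradiometer channel (generic balancing illustration).

    grad_signal: (n_samples,) gradiometer output in tesla
    ref_signals: (n_samples, 3) reference magnetometer outputs (Bx, By, Bz)
    """
    # Fit coefficients c so that ref_signals @ c best reproduces the
    # environmental interference seen by the gradiometer, then remove it.
    c, *_ = np.linalg.lstsq(ref_signals, grad_signal, rcond=None)
    return grad_signal - ref_signals @ c

# Usage with synthetic data: 50 Hz line interference couples into both the
# reference channels and the gradiometer that carries a weak heart signal.
fs = 1000
t = np.arange(5000) / fs
interference = 5e-12 * np.sin(2 * np.pi * 50 * t)
ref = np.column_stack([interference, 0.3 * interference, 0.1 * interference])
mcg = 1e-12 * np.sin(2 * np.pi * 1.2 * t) + 0.8 * interference
clean = cancel_reference_noise(mcg, ref)
```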

    A new framework for host-pathogen interaction research

    COVID-19 often manifests with different outcomes in different patients, highlighting the complexity of the host-pathogen interactions involved in manifestations of the disease at the molecular and cellular levels. In this paper, we propose a set of postulates and a framework for systematically understanding complex molecular host-pathogen interaction networks. Specifically, we first propose four host-pathogen interaction (HPI) postulates as the basis for understanding molecular and cellular host-pathogen interactions and their relations to disease outcomes. These four postulates cover the evolutionary dispositions involved in HPIs, the dynamic nature of HPI outcomes, the roles that HPI components may play in leading to such outcomes, and the HPI checkpoints that are critical for specific disease outcomes. Based on these postulates, an HPI Postulate and Ontology (HPIPO) framework is proposed to apply interoperable ontologies to systematically model and represent the various granular details and knowledge within the scope of the HPI postulates, in a way that supports AI-ready data standardization, sharing, integration, and analysis. As a demonstration, the HPI postulates and the HPIPO framework were applied to study COVID-19 with the Coronavirus Infectious Disease Ontology (CIDO), leading to a novel approach to the rational design of drug/vaccine cocktails aimed at interrupting processes occurring at critical host-coronavirus interaction checkpoints. Furthermore, the host-coronavirus protein-protein interactions (PPIs) relevant to COVID-19 were predicted and evaluated based on prior knowledge of curated PPIs and domain-domain interactions, and how such studies can be further explored with the HPI postulates and the HPIPO framework is discussed.
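
    As a toy illustration of the kind of domain-domain-interaction (DDI) evidence mentioned above (not the paper's actual prediction pipeline), a candidate host-virus protein pair can be scored by counting how many of its domain pairs appear in a curated DDI set. The domain identifiers and the tiny curated sets below are invented for the example.

```python
# Toy DDI-based scoring of candidate host-virus protein-protein interactions.
# All domain identifiers here are illustrative placeholders.
from itertools import product

curated_ddis = {("Pfam_A", "Pfam_X"), ("Pfam_B", "Pfam_Y")}   # known interacting domain pairs
protein_domains = {
    "HUMAN_ACE2": ["Pfam_A", "Pfam_C"],                        # host protein domains (made up)
    "SARS2_Spike": ["Pfam_X"],                                  # viral protein domains (made up)
}

def ddi_score(host, virus):
    # Count domain pairs of (host, virus) supported by the curated DDI set.
    pairs = product(protein_domains[host], protein_domains[virus])
    return sum((h, v) in curated_ddis or (v, h) in curated_ddis for h, v in pairs)

print(ddi_score("HUMAN_ACE2", "SARS2_Spike"))   # 1 -> one supporting domain pair
```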

    2dx_merge: Data management and merging for 2D crystal images

    Electron crystallography of membrane proteins determines the structure of membrane-reconstituted and two-dimensionally (2D) crystallized membrane proteins by low-dose imaging with the transmission electron microscope and computer image processing. We have previously presented the software system 2dx for user-friendly image processing of 2D crystal images. Its central component, 2dx_image, is based on the MRC program suite and allows optionally fully automatic processing of a single 2D crystal image. We present here the program 2dx_merge, which assists the user in the management of a 2D crystal image processing project and facilitates the merging of the data from multiple images. The merged dataset can be used as a reference to re-process all images, which usually improves the resolution of the final reconstruction. Image processing and merging can be applied iteratively until convergence is reached. 2dx is available under the GNU General Public License at http://2dx.org.
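
    The iterate-until-convergence workflow described above can be summarized schematically. The helper functions in the sketch below are hypothetical placeholders for the corresponding 2dx/MRC processing steps, not the actual 2dx_merge interface.

```python
# Schematic of the merge / re-process cycle: process each image against the
# current merged reference, merge the results, and repeat until the resolution
# of the merged reconstruction stops improving.
def iterative_merge(images, process_image, merge, resolution, max_rounds=10, tol=0.05):
    reference = None
    best_res = float("inf")
    for _ in range(max_rounds):
        # Re-process every 2D crystal image, optionally against the merged reference.
        processed = [process_image(img, reference=reference) for img in images]
        reference = merge(processed)      # merged dataset becomes the new reference
        res = resolution(reference)       # e.g. estimated from the merged reconstruction
        if best_res - res < tol:          # stop once the resolution no longer improves
            break
        best_res = res
    return reference
```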

    A maximum likelihood approach to two-dimensional crystals

    Maximum likelihood (ML) processing of transmission electron microscopy images of protein particles can produce reconstructions of superior resolution due to reduced reference bias. We have investigated an ML processing approach applied to images centered on the unit cells of two-dimensional (2D) crystal images. The implemented software makes use of the predictive lattice node tracking in the MRC software, which is used to window particle stacks. These are then noise-whitened and subjected to ML processing. The resulting ML maps are translated into amplitudes and phases for further processing within the 2dx software package. Compared with ML processing of randomly oriented single particles, the required computational costs are greatly reduced, as the 2D crystals restrict the parameter search space. The software was applied to images of negatively stained or frozen hydrated 2D crystals of different crystal order. We find that the ML algorithm is not free from reference bias, even though its sensitivity to noise correlation is lower than that of pure cross-correlation alignment. Compared with crystallographic processing, the newly developed software yields better resolution for 2D crystal images of lower crystal quality, and it performs equally well for well-ordered crystal images.
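
    One step mentioned above, noise whitening of the windowed unit-cell particles, can be illustrated generically: each particle's Fourier transform is divided by an estimate of the noise amplitude spectrum so that the noise becomes approximately flat. The sketch below is a simplified stand-in, not the implemented pipeline.

```python
import numpy as np

def noise_whiten(particles, noise_patches, eps=1e-8):
    """Whiten particle images using a noise amplitude spectrum estimated
    from background patches (simplified illustration).

    particles:     (n, h, w) windowed unit-cell images
    noise_patches: (m, h, w) patches containing only background noise
    """
    # 2D noise amplitude spectrum, averaged over the background patches.
    noise_amp = np.mean(np.abs(np.fft.fft2(noise_patches)), axis=0)
    # Divide each particle's spectrum by the noise amplitude and transform back.
    whitened = np.fft.ifft2(np.fft.fft2(particles) / (noise_amp + eps))
    return whitened.real
```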

    Recognition of Transportation State by Smartphone Sensors Using Deep Bi-LSTM Neural Network

    Smartphones have been used to recognize different transportation states. However, current studies focus on the speed of the object, relying only on the GPS sensor rather than considering other suitable sensors and practical application factors. In this study, we propose a novel method that considers these factors comprehensively to enhance transportation state recognition. A deep Bi-LSTM (bidirectional long short-term memory) neural network, a crowd-sourcing model, and the TensorFlow deep learning system are used to classify the transportation states. Meanwhile, the data captured by the accelerometer and gyroscope sensors of the smartphone are used to test and adjust the deep Bi-LSTM neural network model, making it easy to transfer the model to smartphones and perform real-time recognition. The experimental results show that this study achieves transportation activity classification with an accuracy of up to 92.8%. The deep Bi-LSTM neural network model can also be applied to other time-series tasks such as signal recognition and action analysis.
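
    A minimal model of the kind described, a bidirectional LSTM classifying fixed-length windows of accelerometer and gyroscope samples, might look like the Keras sketch below. The window length, layer sizes, and the number of transportation classes are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

# Each input window: 200 time steps x 6 channels (3-axis accelerometer + 3-axis gyroscope).
WINDOW, CHANNELS, N_CLASSES = 200, 6, 5   # assumed values, not the paper's settings

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use labelled sensor windows, e.g.:
# model.fit(train_windows, train_labels, validation_split=0.1, epochs=20)
```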

    Malicious Domain Names Detection Algorithm Based on N-Gram

    Malicious domain name attacks have become a serious issue for Internet security. In this study, a malicious domain name detection algorithm based on N-Gram is proposed. The top 100,000 domain names in Alexa 2013 are used to build the N-Gram model. Each domain name, excluding the top-level domain, is segmented by domain level into substrings of lengths 3, 4, 5, 6, and 7. A substring set is established from the 100,000 domain names, and the weight of each substring is calculated from its number of occurrences in the set. To detect a malicious attack, the queried domain name is segmented in the same way and its reputation value is calculated from the weights of its substrings. Finally, the domain name is judged malicious or benign by thresholding the reputation value. In experiments on Alexa 2017 and the Malware Domain List, the proposed detection algorithm yielded an accuracy of 94.04%, a false negative rate of 7.42%, and a false positive rate of 6.14%. Its time complexity is lower than that of other popular malicious domain name detection algorithms.
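
    The reputation computation described above can be sketched directly: build a weight table from substrings of whitelisted domains, then score an unseen domain by the weights of its own substrings and compare against a threshold. The tiny whitelist, the log-damped weighting, and the threshold below are illustrative choices, not the paper's exact parameters.

```python
from collections import Counter
import math

def substrings(label, lengths=(3, 4, 5, 6, 7)):
    # All substrings of the given lengths from one domain level (e.g. "google").
    return [label[i:i + n] for n in lengths for i in range(len(label) - n + 1)]

def domain_labels(domain):
    # Drop the top-level domain and split the rest into levels.
    return domain.lower().split(".")[:-1]

def build_weights(benign_domains):
    # Weight of a substring grows with how often it occurs in the whitelisted domains.
    counts = Counter(s for d in benign_domains
                       for lbl in domain_labels(d)
                       for s in substrings(lbl))
    return {s: math.log1p(c) for s, c in counts.items()}   # log-damped counts (illustrative choice)

def reputation(domain, weights):
    # Average substring weight of the queried domain; higher means more benign-looking.
    subs = [s for lbl in domain_labels(domain) for s in substrings(lbl)]
    return sum(weights.get(s, 0.0) for s in subs) / max(len(subs), 1)

# Usage: in practice the whitelist would be the Alexa top 100,000 domains.
weights = build_weights(["google.com", "wikipedia.org", "youtube.com", "baidu.com"])
for d in ["googlemail.com", "xjwz9qph3f.biz"]:
    score = reputation(d, weights)
    # 0.1 is an illustrative threshold for this tiny whitelist; the benign-looking
    # name scores higher than the random-looking one.
    print(d, round(score, 3), "benign" if score > 0.1 else "suspicious")
```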